
    Lyapunov-Based Reinforcement Learning for Decentralized Multi-Agent Control

    Decentralized multi-agent control has broad applications, ranging from multi-robot cooperation to distributed sensor networks. Such systems are complex, with unknown or highly uncertain dynamics, so traditional model-based control methods can hardly be applied. Compared with model-based control, deep reinforcement learning (DRL) is a promising way to learn a controller/policy from data without knowing the system dynamics. However, directly applying DRL to decentralized multi-agent control is challenging, because interactions among agents make the learning environment non-stationary. More importantly, existing multi-agent reinforcement learning (MARL) algorithms cannot ensure closed-loop stability of a multi-agent system from a control-theoretic perspective, so the learned control policies are likely to generate abnormal or dangerous behaviors in real applications. Without a stability guarantee, applying existing MARL algorithms to real multi-agent systems such as UAVs, robots, and power systems is therefore a serious concern. In this paper, we propose a new MARL algorithm for decentralized multi-agent control with a stability guarantee. The new algorithm, termed multi-agent soft actor-critic (MASAC), is developed under the well-known framework of "centralized-training-with-decentralized-execution". Closed-loop stability is guaranteed by introducing a stability constraint, designed using Lyapunov's method from control theory, during the policy-improvement step of MASAC. We present a multi-agent navigation example to demonstrate the effectiveness of the proposed MASAC algorithm.
    Comment: Accepted to The 2nd International Conference on Distributed Artificial Intelligence
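The core idea of a Lyapunov stability constraint during policy improvement can be sketched in a few lines. This is a minimal illustrative example, not the authors' MASAC implementation: the single-integrator dynamics, the quadratic Lyapunov candidate, the decrease rate `alpha`, and the contracting fallback controller are all assumptions chosen to keep the sketch self-contained.

```python
import random

def lyapunov_value(state):
    # Hypothetical Lyapunov candidate: squared distance to the goal (origin).
    return sum(s * s for s in state)

def step(state, action):
    # Hypothetical single-integrator dynamics: s' = s + a.
    return [s + a for s, a in zip(state, action)]

def stability_constrained_step(state, propose_action, dynamics, alpha=0.1):
    """Accept a proposed action only if it satisfies the Lyapunov decrease
    condition L(s') - L(s) <= -alpha * L(s); otherwise fall back to a
    simple contracting action that strictly decreases L."""
    action = propose_action(state)
    next_state = dynamics(state, action)
    if lyapunov_value(next_state) - lyapunov_value(state) <= -alpha * lyapunov_value(state):
        return action, next_state
    # Fallback: contract toward the goal; L shrinks by a factor of 0.25.
    safe_action = [-0.5 * s for s in state]
    return safe_action, dynamics(state, safe_action)
```

Even with a purely random proposal policy, the constraint forces the state toward the goal, which is the property the paper's stability constraint is meant to guarantee for the learned policies.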

    Collaborative Robots for Infrastructure Security Applications

    We discuss techniques for using collaborative robots in infrastructure security applications. A vast number of critical facilities, including power plants, military bases, water plants, and airfields, must be protected against unauthorized intruders. A team of mobile robots working cooperatively can reduce the demand on human personnel and avoid the loss of effectiveness caused by human fatigue and boredom. This chapter addresses this scenario by first presenting distributed sensing algorithms for robot localization and 3D map building. We then describe a multi-robot motion planning algorithm for a patrolling and threat-response scenario, in which neural-network-based methods are used to plan a complete-coverage patrolling path. A block diagram of the integrated sensing and planning system is presented, leading to a successful proof-of-principle demonstration. Previous approaches to similar scenarios have been greatly limited by their reliance on global positioning systems, on manually constructed facility maps, and on humans to plan and specify the individual robot paths for the mission. The proposed approaches overcome these limits and enable the systems to be deployed autonomously, without modifications to the operating environment.
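To make the notion of a complete-coverage patrolling path concrete, here is a minimal sketch of the classic boustrophedon (back-and-forth) sweep over a grid-decomposed area. This is an illustrative baseline, not the neural-network planner the chapter describes; the grid decomposition and obstacle-free workspace are assumptions.

```python
def coverage_path(rows, cols):
    """Boustrophedon sweep: visit every cell of a rows x cols grid exactly
    once, reversing direction on alternate rows so consecutive cells are
    always adjacent."""
    path = []
    for r in range(rows):
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cells)
    return path
```

A learned planner would have to satisfy the same two correctness properties this sweep guarantees by construction: every cell is visited exactly once, and each move is between adjacent cells.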